223 research outputs found

    Variable Precision Rough Set Model for Incomplete Information Systems and Its Beta-Reducts

    As the original rough set model is quite sensitive to noisy data, Ziarko proposed the variable precision rough set (VPRS) model to handle noisy data and uncertain information by tolerating some degree of misclassification in the mining process. In this paper, a variable precision rough set model for incomplete information systems is proposed by combining the VPRS model with the incomplete information system, and the beta-lower and beta-upper approximations are defined. Because the classical VPRS model lacks a feasible method for determining the precision parameter beta when computing beta-reducts, we present an approach to determine beta. Then, by computing the discernibility matrix and discernibility functions based on the beta-lower approximation, the beta-reducts and the generalized decision rules are obtained. Finally, a concrete example illustrates the validity and practicability of the beta-reducts proposed in this paper.
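
    As a rough illustration of the beta-approximations described above (a minimal sketch under Ziarko's majority-inclusion convention; the function and example are ours, not the paper's), the beta-lower approximation collects the equivalence classes whose overlap with the target set is at least beta, and the beta-upper approximation those whose overlap exceeds 1 - beta:

```python
def beta_approximations(classes, target, beta=0.8):
    """Ziarko-style beta-lower and beta-upper approximations.

    classes: iterable of equivalence classes (sets) partitioning the universe.
    target:  the set X to approximate. Assumes 0.5 < beta <= 1.
    """
    X = set(target)
    lower, upper = set(), set()
    for cls in classes:
        cls = set(cls)
        inclusion = len(cls & X) / len(cls)  # conditional probability P(X | [x])
        if inclusion >= beta:
            lower |= cls                     # class is confidently inside X
        if inclusion > 1 - beta:
            upper |= cls                     # class plausibly overlaps X
    return lower, upper

# Example: universe {1..6} in three classes, target X = {1, 2, 3, 5}
classes = [{1, 2}, {3, 4}, {5, 6}]
lower, upper = beta_approximations(classes, {1, 2, 3, 5}, beta=0.8)
print(lower)  # {1, 2}: only this class is at least 80% inside X
print(upper)  # {1, 2, 3, 4, 5, 6}: every class overlaps X by more than 20%
```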

    Determination of Chlormequat and Mepiquat Residues in Tomato Plants Using Accelerated Solvent Extraction-Ultra-Performance Liquid Chromatography-Tandem Mass Spectrometry

    An accelerated solvent extraction-ultra-performance liquid chromatography-tandem mass spectrometry (ASE-UPLC-MS/MS) method, using purified water as the extraction solvent, was developed for the quantitative analysis of chlormequat (CQ) and mepiquat (MQ) in tomato plant samples with higher sensitivity and shorter extraction time. Both the CQ and MQ residues and their dissipation rates were covered in this paper. The limits of detection (S/N > 3) and limits of quantitation (S/N > 10) for CQ and MQ were 0.02 μg/kg and 0.1 μg/kg, respectively. The linear range was 0.2~10 μg/kg and the correlation coefficients (r2) were no less than 0.9990. The average recoveries of CQ and MQ from tomato root, stem, and leaf at the three spiked levels of 1.0, 2.0, and 5.0 μg/kg were in the ranges of 100.0%~118.8% and 93.2%~110.7%, respectively. The dissipation experiment showed that, on average, 98.8% of CQ residues and 99.7% of MQ residues had dissipated after 33 days, with half-lives of 3.67 d and 3.66 d, respectively, which provides a guideline for the safe use of CQ and MQ on tomato.
    Key words: Tomato plants; Accelerated solvent extraction; Ultra-performance liquid chromatography-tandem mass spectrometry; Chlormequat; Mepiquat
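
    As a quick plausibility check on the reported dissipation figures (our own arithmetic, assuming ideal first-order decay rather than the paper's fitted kinetics), the fraction of residue remaining after time t is 0.5 raised to t divided by the half-life:

```python
# Consistency check (ours, not from the paper): under first-order kinetics,
# remaining fraction = 0.5 ** (t / half_life).
half_lives = {"CQ": 3.67, "MQ": 3.66}  # days, as reported in the abstract
t = 33.0                               # days

for name, hl in half_lives.items():
    remaining = 0.5 ** (t / hl)
    print(f"{name}: {100 * (1 - remaining):.1f}% dissipated after {t:.0f} days")
# Prints ~99.8% for both, in the same ballpark as the reported 98.8% and 99.7%,
# which were measured rather than derived from the half-life.
```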

    Contrastive Attraction and Contrastive Repulsion for Representation Learning

    Contrastive learning (CL) methods effectively learn data representations in a self-supervised manner, where the encoder contrasts each positive sample against multiple negative samples via a one-vs-many softmax cross-entropy loss. By leveraging large amounts of unlabeled image data, recent CL methods have achieved promising results when pretrained on large-scale datasets such as ImageNet. However, most of them treat augmented views of the same instance as positive pairs and views of other instances as negative ones. Such a binary partition insufficiently captures the relations between samples and tends to yield worse performance when generalized to images in the wild. In this paper, to further improve the performance of CL and enhance its robustness across various datasets, we propose a doubly contrastive strategy that separately compares positive and negative samples within their own groups, and then contrasts the positive and negative groups against each other. We realize this strategy with contrastive attraction and contrastive repulsion (CACR), which makes the query exert a greater force both to attract more distant positive samples and to repel closer negative samples. Theoretical analysis reveals that CACR generalizes CL's behavior of positive attraction and negative repulsion, and further considers the intra-contrastive relations within the positive and negative pairs to narrow the gap between the sampled and true distributions, which is important when datasets are less curated. In our extensive experiments, CACR not only demonstrates good performance on CL benchmarks, but also shows better robustness when generalized to imbalanced image datasets. Code and pre-trained checkpoints are available at https://github.com/JegZheng/CACR-SSL.
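
    For context, here is a minimal sketch of the one-vs-many softmax cross-entropy (InfoNCE-style) loss that the abstract describes as the standard CL baseline; this is the point of departure for CACR, not the CACR loss itself, and the tensor shapes and names are our assumptions:

```python
import torch
import torch.nn.functional as F

def info_nce(query, positives, negatives, temperature=0.1):
    """One-vs-many softmax cross-entropy over one positive and K negatives."""
    q = F.normalize(query, dim=-1)        # (B, D) query embeddings
    pos = F.normalize(positives, dim=-1)  # (B, D) one positive view per query
    neg = F.normalize(negatives, dim=-1)  # (B, K, D) K negatives per query
    l_pos = (q * pos).sum(dim=-1, keepdim=True)  # (B, 1) positive similarity
    l_neg = torch.einsum("bd,bkd->bk", q, neg)   # (B, K) negative similarities
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings: batch of 8, 128-dim, 16 negatives each
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 16, 128))
print(loss.item())
```

    CACR replaces this flat one-vs-many contrast with separate within-group comparisons among positives and among negatives before contrasting the two groups.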

    Adjustment of Synchronization Stability of Dynamic Brain-Networks Based on Feature Fusion

    When the brain is active, the neural activities of different regions are integrated on various spatial and temporal scales; in neurobiological theory this is termed the synchronization phenomenon. This synchronicity is also the main mechanism underlying information integration and processing in the brain. Clinical medicine has found that some neurological diseases that are difficult to cure involve deficiencies or abnormalities in the whole or local integration processes of the brain. By studying the synchronization capabilities of the brain network, we can characterize both the state of the interactions between brain regions and the differences between people with a mental illness and healthy controls, by measuring the rapid changes in brain activity in patients with psychiatric disorders and the strength and integrity of their entire brain network; this is significant for the study of mental illness. Because static brain-network connection methods cannot assess the dynamic interactions within the brain, we introduced the concepts of dynamics and variability into an EEG brain functional network constructed from dynamic connections and used it to analyze the variability in the temporal characteristics of the EEG functional network. We used the spectral features of the brain network to extract its synchronization features, and used these features to describe the process of change and the differences in the brain network's synchronization ability between a group of patients and healthy controls during a working-memory task. We propose a method based on the fusion of traditional features and spectral features to adjust a patient's brain-network synchronization ability so that it becomes consistent with that of healthy controls, theoretically achieving the purpose of treating the disease. Studying the stability of brain-network synchronization can provide new insights into the pathogenic mechanisms and cures of mental diseases and has a wide range of potential applications.
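
    As one concrete example of a spectral synchronization feature (our illustrative choice; the paper's exact features may differ), the eigenratio of the graph Laplacian is a classical measure of how readily a network synchronizes:

```python
import numpy as np

def synchronizability(adjacency):
    """Eigenratio lambda_max / lambda_2 of the graph Laplacian.

    Smaller ratios are conventionally read as easier-to-synchronize networks.
    Assumes an undirected, connected network given as an adjacency matrix.
    """
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    eig = np.sort(np.linalg.eigvalsh(L))    # L is symmetric, so eigvalsh applies
    lambda_2, lambda_max = eig[1], eig[-1]  # eig[0] ~ 0 for a connected graph
    return lambda_max / lambda_2

# Example: a 4-node ring network (hypothetical toy graph, not EEG data)
ring = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(synchronizability(ring))  # 2.0 for this ring
```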

    Operating Conditions of Hollow Fiber Supported Liquid Membrane for Phenol Extraction from Coal Gasification Wastewater

    The extraction and recycling of phenol from high-concentration coal gasification wastewater was studied using polypropylene (PP) and polyvinylidene fluoride (PVDF) hollow fiber membranes as liquid membrane supports, a mixture of tributyl phosphate (TBP) and kerosene as the liquid membrane phase, and sodium hydroxide as the stripping agent. The experiments investigated the effect of the operating conditions of the hollow fiber supported liquid membrane, such as aqueous phase temperature and the connection arrangement of the membrane modules, on the phenol extraction efficiency. The conclusions obtained from the lab-scale experiments guided the scale-up experiments, in which three membrane modules connected in parallel, followed by three membrane modules connected in series, were used to increase the treatment capacity and improve the treatment effect. Under operating conditions of wastewater temperature 20 °C, pH 7.5~8.1, wastewater flow rate 100 L/h, stripping phase concentration 0.1 mol/L, and stripping phase flow rate 50 L/h, the extraction efficiency of the PP-TBP supported liquid membrane system was 87.02% and the phenol concentration of the effluent was 218.14 mg/L, which met the requirements for further biodegradation treatment.
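
    A back-of-envelope check of the reported numbers (our arithmetic, not the paper's): defining the extraction efficiency as E = (C_in - C_out) / C_in, the 87.02% efficiency and 218.14 mg/L effluent together imply the approximate feed concentration:

```python
# Implied feed phenol concentration from efficiency and effluent concentration.
E = 0.8702        # reported extraction efficiency
c_out = 218.14    # reported effluent phenol, mg/L

c_in = c_out / (1 - E)  # rearranging E = (c_in - c_out) / c_in
print(f"implied feed concentration: {c_in:.0f} mg/L")  # ~1681 mg/L
```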

    Establishment and application of a VP3 antigenic domain-based peptide ELISA for the detection of antibody against goose plague virus infection

    No commercial test kit has been available for detecting antibodies against goose plague virus (GPV) infection, which has posed challenges for the prevention and control of this disease. In this study, bioinformatics software was used to analyze and predict the dominant antigenic regions of VP3, the main protective antigen of GPV. Three bovine serum albumin (BSA)-coupled peptides were synthesized as ELISA coating antigens. Experimental results showed that the VP3-1 (358-392aa) peptide had the best reactivity and specificity. Using the BSA-VP3-1 peptide, a detection method for antibodies against GPV infection was established, demonstrating excellent specificity with no cross-reactivity with antibodies against common infectious goose pathogens. The intra-batch and inter-batch coefficients of variation were both less than 7%, indicating good stability and repeatability. Dynamic antibody monitoring of vaccinated goslings and the testing of 120 clinical immune goose serum samples collectively demonstrate that the BSA-VP3-1 peptide ELISA can detect antibodies against GPV in immunized goose populations with higher sensitivity than the traditional agar gel precipitation method. Taken together, the developed peptide ELISA based on VP3 358-392aa could be useful for laboratory viral diagnosis and routine surveillance on goose farms. Its main application is to monitor antibody levels and vaccine efficacy for GPV, which will help the prevention and control of gosling plague.
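
    For reference, the intra- and inter-batch repeatability figures quoted above are coefficients of variation; a minimal sketch with hypothetical OD readings (the values below are illustrative, not the paper's data):

```python
import numpy as np

def coefficient_of_variation(readings):
    """CV (%) = sample standard deviation / mean * 100."""
    vals = np.asarray(readings, dtype=float)
    return vals.std(ddof=1) / vals.mean() * 100

# Hypothetical OD450 replicates of one serum sample within a single batch
replicates = [0.92, 0.95, 0.89, 0.94]
print(f"intra-batch CV: {coefficient_of_variation(replicates):.1f}%")  # ~2.9%
```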

    Edge-Cloud Polarization and Collaboration: A Comprehensive Survey for AI

    Influenced by the great success of deep learning via cloud computing and the rapid development of edge chips, research in artificial intelligence (AI) has shifted to both computing paradigms, i.e., cloud computing and edge computing. In recent years, we have witnessed significant progress in developing more advanced AI models on cloud servers that surpass traditional deep learning models, owing to model innovations (e.g., Transformers, pretrained families), the explosion of training data, and soaring computing capabilities. However, edge computing, especially edge-cloud collaborative computing, is still in its infancy, owing to resource-constrained IoT scenarios with very limited algorithms deployed. In this survey, we conduct a systematic review of both cloud and edge AI. Specifically, we are the first to set up the collaborative learning mechanism for cloud and edge modeling, with a thorough review of the architectures that enable such a mechanism. We also discuss the potential of, and practical experience with, some ongoing advanced edge AI topics, including pretraining models, graph neural networks, and reinforcement learning. Finally, we discuss the promising directions and challenges in this field.
    Comment: 20 pages, Transactions on Knowledge and Data Engineering